949 research outputs found

    Current Lead Optimization for Cryogenic Operation at Intermediate Temperatures


    Learning terminological Naïve Bayesian classifiers under different assumptions on missing knowledge

    Knowledge available through Semantic Web standards can easily be missing, generally because of the adoption of the Open World Assumption (i.e., the truth value of an assertion is not necessarily known). However, the rich relational structure that characterizes ontologies can be exploited for handling such missing knowledge in an explicit way. We present a Statistical Relational Learning system designed for learning terminological naïve Bayesian classifiers, which estimate the probability that a generic individual belongs to the target concept given its membership in a set of Description Logic concepts. During the learning process, we consistently handle the lack of knowledge that may be introduced by the adoption of the Open World Assumption, depending on the varying nature of the missing knowledge itself.
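    The classifier described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: membership of an individual in each Description Logic concept is encoded as +1 (member), -1 (member of the complement), or 0 (unknown under the Open World Assumption), and the unknown value is treated explicitly as a value of its own.

```python
from collections import Counter

def train(examples, labels, n_concepts, alpha=1.0):
    """Fit a naive Bayes classifier over three-valued concept-membership
    features (+1 member, -1 non-member, 0 unknown under the OWA), with
    Laplace smoothing. Returns a function mapping an individual's feature
    tuple to a posterior over the target concept (1) and its complement (0).
    All names here are hypothetical, for illustration only."""
    counts = {c: [Counter() for _ in range(n_concepts)] for c in (0, 1)}
    priors = Counter(labels)
    for x, y in zip(examples, labels):
        for j, v in enumerate(x):
            counts[y][j][v] += 1

    def predict(x):
        scores = {}
        for c in (0, 1):
            p = priors[c] / len(labels)
            for j, v in enumerate(x):
                # 3 possible feature values, hence 3 * alpha in the denominator
                p *= (counts[c][j][v] + alpha) / (priors[c] + 3 * alpha)
            scores[c] = p
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}

    return predict
```

    Treating 0 as a first-class value is only one of the assumptions on missing knowledge the abstract alludes to; alternatives (e.g., marginalizing unknowns out) lead to different estimators.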

    A graph regularization based approach to transductive class-membership prediction

    Considering the increasing availability of structured, machine-processable knowledge in the context of the Semantic Web, relying only on purely deductive inference may be limiting. This work proposes a new method for similarity-based class-membership prediction in Description Logic knowledge bases. The underlying idea is to propagate class-membership information among similar individuals; the method is non-parametric in nature and characterised by interesting complexity properties, making it a potential candidate for large-scale transductive inference. We also evaluate its effectiveness with respect to other approaches based on inductive inference in the Semantic Web literature.
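    The propagation idea can be sketched as a standard graph-based label-propagation loop; this is a generic illustration of the family of methods, not the paper's exact formulation. Similarities between individuals form a graph, known memberships (+1/-1) are clamped, and scores are repeatedly averaged over neighbours.

```python
import numpy as np

def propagate(W, y, labeled, alpha=0.5, iters=50):
    """Transductive class-membership propagation (illustrative sketch).
    W: symmetric similarity matrix over all individuals;
    y: +1/-1 for labelled individuals, 0 for unlabelled ones;
    labeled: boolean mask of the labelled individuals.
    Each iteration mixes neighbour averages with the initial labels,
    clamping the known memberships, then returns the sign of the scores."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / d[:, None]                 # row-normalised similarities
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
        f[labeled] = y[labeled]        # clamp labelled individuals
    return np.sign(f)
```

    On a small chain graph with opposite labels at the two ends, the interior nodes take the membership of the nearer labelled end, which is the intuition behind "propagating class-membership information among similar individuals."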

    Backpropagating through Markov Logic Networks

    We integrate Markov Logic Networks with deep learning architectures operating on high-dimensional and noisy feature inputs. Instead of relaxing the discrete components into smooth functions, we propose an approach that allows us to backpropagate through standard statistical relational learning components using perturbation-based differentiation. The resulting hybrid models are shown to outperform models relying solely on deep-learning-based function fitting. We find that noise perturbations are required for the proposed hybrid models to learn robustly from the training data.
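    The core trick behind perturbation-based differentiation can be illustrated on the simplest discrete component, an argmax, whose gradient is zero almost everywhere. Averaging the argmax over noise perturbations of its input yields a smoothed, differentiable surrogate. This is a generic sketch of the technique, not the paper's MLN-specific machinery, and the names are ours.

```python
import numpy as np

def perturbed_argmax(theta, sigma=0.5, n_samples=1000, rng=None):
    """Monte Carlo estimate of E[one_hot(argmax(theta + sigma * Z))],
    Z ~ N(0, I). Unlike the raw argmax, this expectation is a smooth
    function of theta, so gradients can flow through it; the Jacobian
    can be estimated similarly from the same noise samples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = theta.shape[0]
    Z = rng.standard_normal((n_samples, k))
    idx = np.argmax(theta + sigma * Z, axis=1)
    return np.bincount(idx, minlength=k) / n_samples
```

    In a hybrid model, the smoothed output would replace the hard discrete decision during training, letting standard backpropagation reach the neural feature extractor below the relational component.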

    Configurable DC current leads, with Peltier elements

    There is interest in decreasing the thermal load imposed on the cryogenic environment by the current leads. The cryogenic load is challenging both at the design current and at part-load operation, when the current is reduced or zero. In this paper we explore the combination of Peltier elements and a novel configurable current-lead concept. The use of Peltier elements reduces the cryogenic load by about 25%. The configurable concept is based on the use of multiple heat exchangers, which allows the current leads to be optimized when operating at various currents. Used together, the Peltier/configurable current lead reduces the cryogenic load by a factor of 4 in low-current/idle conditions. We also explore the transient operation of the current leads, as well as their overload capacity.

    Cryostat Optimization Through Multiple Stage Thermal Shields


    Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning.

    Although reinforcement learning has been successfully applied in many domains in recent years, we still lack agents that can systematically generalize. While relational inductive biases that fit a task can improve the generalization of RL agents, these biases are commonly hard-coded directly into the agent's neural architecture. In this work, we show that we can incorporate relational inductive biases, encoded in the form of relational graphs, into agents. Based on this insight, we propose Grid-to-Graph (GTG), a mapping from grid structures to relational graphs that carry useful spatial relational inductive biases when processed through a Relational Graph Convolution Network (R-GCN). We show that, with GTG, R-GCNs generalize better both in-distribution and out-of-distribution than baselines based on Convolutional Neural Networks and Neural Logic Machines on challenging procedurally generated environments and MinAtar. Furthermore, we show that GTG produces agents that can jointly reason over observations and environment dynamics encoded in knowledge bases.
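    A grid-to-graph mapping of this kind can be sketched in a few lines. This is an illustrative reconstruction under our own assumptions, not the paper's exact scheme: each grid cell becomes a node, and typed edges (here the hypothetical relation names LEFT_OF and ABOVE) encode spatial structure that a relational model such as an R-GCN can then exploit.

```python
def grid_to_graph(height, width):
    """Map a height x width grid to a list of typed edges
    (src_node, relation, dst_node); cells are numbered row-major.
    Relation names and layout are illustrative assumptions."""
    node = lambda r, c: r * width + c
    edges = []
    for r in range(height):
        for c in range(width):
            if c + 1 < width:                              # horizontal neighbour
                edges.append((node(r, c), "LEFT_OF", node(r, c + 1)))
            if r + 1 < height:                             # vertical neighbour
                edges.append((node(r, c), "ABOVE", node(r + 1, c)))
    return edges
```

    Because the relations are typed, a relational graph network learns separate transformations per relation, which is how the spatial inductive bias enters the architecture rather than being hard-coded into convolutional weight sharing.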


    Superconducting DC Power Transmission and Distribution
